Deep neural network compression algorithm based on combined dynamic pruning
ZHANG Mingming, LU Qingning, LI Wenzhong, SONG Hu
Journal of Computer Applications    2021, 41 (6): 1589-1596.   DOI: 10.11772/j.issn.1001-9081.2020121914
As a branch of model compression, network pruning reduces computational cost by removing unimportant parameters from a deep neural network. However, permanent pruning causes an irreversible loss of model capacity. To address this issue, a combined dynamic pruning algorithm was proposed that jointly analyzes the characteristics of the convolution kernels and the input images. Some convolution kernels were zeroized but still allowed to be updated during training; only after the network converged were the zeroized kernels permanently removed. At the same time, the input images were sampled to extract features, which a channel-importance prediction network then analyzed to determine the channels that could be skipped during the convolution operation. Experimental results on M-CifarNet and VGG16 show that combined dynamic pruning achieves floating-point operation compression ratios of 2.11 and 1.99 respectively, with accuracy losses of less than 0.8 and 1.2 percentage points compared to the corresponding baseline models (M-CifarNet and VGG16). Compared with existing network pruning algorithms, the combined dynamic pruning algorithm effectively reduces the number of FLoating-point OPerations (FLOPs) and the parameter scale of the model, and achieves higher accuracy under the same compression ratio.
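The kernel-side half of the scheme can be sketched as follows: a minimal NumPy illustration of "soft" pruning, where the lowest-norm kernels are zeroized but kept in the tensor so training can still update them. The L2-norm importance criterion and the function name are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def soft_prune(kernels, prune_ratio):
    """Zeroize the conv kernels with the smallest L2 norms.

    The zeroized kernels stay in the tensor, so gradient updates can
    revive them in later epochs; only after convergence would they be
    permanently removed.
    """
    # kernels: array of shape (num_kernels, k, k)
    norms = np.linalg.norm(kernels.reshape(len(kernels), -1), axis=1)
    num_prune = int(len(kernels) * prune_ratio)
    # indices of the least important kernels
    prune_idx = np.argsort(norms)[:num_prune]
    mask = np.ones(len(kernels), dtype=bool)
    mask[prune_idx] = False
    # broadcast the kernel mask over the spatial dimensions
    return kernels * mask[:, None, None], mask
```

In a real training loop the mask would be recomputed each epoch, so a kernel zeroized early can regain weight before the final, permanent removal.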
Image description generation algorithm based on improved attention mechanism
LI Wenhui, ZENG Shangyou, WANG Jinjin
Journal of Computer Applications    2021, 41 (5): 1262-1267.   DOI: 10.11772/j.issn.1001-9081.2020071078
Image description aims to express the global information contained in an image in sentences, so an image description generation model must both extract image information and express it in natural language. Traditional models combine a Convolutional Neural Network (CNN) with a Recurrent Neural Network (RNN) and can realize image-to-sentence translation to a certain extent, but they suffer from low accuracy and slow training when extracting the key information of an image. To solve this problem, an image description generation model with an improved attention mechanism based on CNN and Long Short-Term Memory (LSTM) network was proposed. VGG19 and ResNet101 were used as the feature extraction networks, and group convolution was introduced into the attention mechanism to replace the traditional fully connected operation, thereby improving the evaluation indices. The model was trained on the public datasets Flickr8K and Flickr30K and validated with several evaluation indices: BLEU (Bilingual Evaluation Understudy), ROUGE_L (Recall-Oriented Understudy for Gisting Evaluation), CIDEr (Consensus-based Image Description Evaluation) and METEOR (Metric for Evaluation of Translation with Explicit Ordering). Experimental results show that, compared with a model using the traditional attention mechanism, the proposed model improves the accuracy of the image description task and outperforms the traditional model on all four evaluation indices.
Binocular vision target positioning method based on coarse-fine stereo matching
MA Weiping, LI Wenxin, SUN Jinchuan, CAO Pengxia
Journal of Computer Applications    2020, 40 (1): 227-232.   DOI: 10.11772/j.issn.1001-9081.2019071010
To address the low positioning accuracy of binocular vision systems, a binocular vision target positioning method based on coarse-fine stereo matching was proposed. A coarse-to-fine matching strategy was adopted. Firstly, at the coarse matching stage, a random fern algorithm based on Canny-Harris feature points was used to identify the target in the left and right images, and the center points of the target rectangular regions were extracted to achieve center matching. Then, at the fine matching stage, a binary feature descriptor based on image gradient information was established, and the right center point obtained by center matching was used as an estimate to set a pixel search range in which the best match of the left center point was found. Finally, the matched center points were substituted into the mathematical model of parallel binocular vision to position the target. Experimental results show that the proposed method keeps the positioning error within 7 mm at distances up to 500 mm, with an average relative positioning error of 2.53%. Compared with other methods, it offers higher positioning accuracy and shorter running time.
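The final positioning step rests on the standard parallel binocular model: depth is focal length times baseline divided by disparity. A minimal sketch (function name, pixel-unit focal length and principal-point defaults are illustrative assumptions):

```python
def triangulate(u_left, u_right, v, focal, baseline, cx=0.0, cy=0.0):
    """Recover the 3D position of a matched center point in a parallel
    binocular rig.

    u_left / u_right: matched pixel columns in the two images,
    v: shared pixel row, focal: focal length in pixels,
    baseline: camera separation (same unit as the returned coordinates).
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a valid match")
    z = focal * baseline / disparity      # depth from similar triangles
    x = (u_left - cx) * z / focal         # back-project the pixel
    y = (v - cy) * z / focal
    return x, y, z
```

This is why the fine matching stage matters: an error of even one pixel in the disparity shifts the recovered depth, and the effect grows with distance.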
Multi-objective optimization model and solution algorithm for emergency material transportation path
LI Zhuo, LI Yinzhen, LI Wenxia
Journal of Computer Applications    2019, 39 (9): 2765-2771.   DOI: 10.11772/j.issn.1001-9081.2019020270
Against the practical background that transporters' own vehicles are in short supply in the early stage of an emergency, the combinatorial optimization problem of hybrid vehicle routing with joint distribution by self-owned vehicles and vehicles rented from third parties was studied. Firstly, considering the different interests of demand points and transporters, a multi-objective hybrid vehicle routing optimization model with soft time windows was established, with the goals of maximizing system satisfaction and minimizing system delivery time and total cost. Secondly, considering the shortcomings of the NSGA-Ⅱ algorithm on this kind of problem, such as poor convergence and an unevenly distributed Pareto frontier, the heuristic strategy and pheromone positive-feedback mechanism of the ant colony algorithm were used to generate the offspring population, a non-dominated sorting strategy was used to guide the multi-objective optimization process, and variable neighborhood descent search was introduced to expand the search space; a multi-objective non-dominated sorting ant colony algorithm was thus proposed to break through the bottleneck of the original algorithm. Examples show that the proposed model can help decision makers choose reasonable paths according to different optimization objectives in different situations, and that the proposed algorithm performs better on problems of different scales and distribution types.
Surface scratch recognition method based on deep neural network
LI Wenjun, CHEN Bin, LI Jianming, QIAN Jide
Journal of Computer Applications    2019, 39 (7): 2103-2108.   DOI: 10.11772/j.issn.1001-9081.2018112247
To achieve robust, accurate and real-time recognition of surface scratches against complex texture backgrounds with uneven brightness, a surface scratch recognition method based on a deep neural network was proposed. The network consists of a style transfer network and a focus Convolutional Neural Network (CNN). The style transfer network, comprising a feedforward conversion network and a loss network, preprocesses surface scratches under complex backgrounds with uneven brightness. Firstly, the style features of a uniform-brightness template and the perceptual features of the detected image were extracted through the loss network, and the feedforward conversion network was trained offline to obtain its optimal parameters. Then, images with uniform brightness and style were generated by the style transfer network. Finally, the proposed focus CNN, based on a focus structure, was used to extract and recognize scratch features in the generated images. A scratch recognition experiment was carried out on metal surfaces under varying illumination. The results show that, compared with traditional image processing methods requiring hand-designed features and with a traditional deep CNN, the false negative rate of scratch detection is as low as 8.54%, with faster convergence and a smoother convergence curve, and better detection results are obtained under different depth models, with accuracy increased by about 2%. The style transfer network solves the problem of uneven brightness while retaining complete scratch features, thus improving recognition accuracy, while the focus CNN achieves robust, accurate and real-time recognition of scratches, greatly reducing both the false negative rate and the false positive rate.
Discretization process of coupled Logistic fractional-order differential equation
LIU Shanshan, GAO Fei, LI Wenqin
Journal of Computer Applications    2019, 39 (1): 305-310.   DOI: 10.11772/j.issn.1001-9081.2018040848
Focusing on the problem of solving a coupled Logistic fractional-order differential equation, a discretization method was introduced to solve it discretely. Firstly, the coupled Logistic integer-order differential equation was extended to the field of fractional-order calculus. Secondly, the corresponding coupled Logistic fractional-order differential equation with piecewise constant arguments was analyzed, and the proposed discretization method was applied to solve the model numerically. Then, according to fixed point theory, the stability of the fixed points of the resulting discrete dynamical system was discussed, and the boundary equation of the first bifurcation of the coupled Logistic fractional-order system in parameter space was given. Finally, the model was numerically simulated in Matlab, and the more complex dynamical phenomena of the model were examined via Lyapunov exponents, phase diagrams, time series diagrams and bifurcation diagrams. The simulation results show that the proposed method successfully discretizes the coupled Logistic fractional-order differential equation.
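One common piecewise-constant-argument discretization of a fractional system scales each increment by h**alpha / Gamma(1 + alpha), reducing to an Euler step when alpha = 1. The sketch below applies that scheme to a symmetric coupled logistic form; both the coupling form and the step function are illustrative assumptions, not the paper's exact model.

```python
from math import gamma

def step(x, y, r, mu, h, alpha):
    """One discretization step of an assumed coupled logistic system
    with fractional order alpha, step size h and coupling strength mu.
    With alpha = 1 this reduces to a plain Euler step.
    """
    c = h ** alpha / gamma(1.0 + alpha)   # fractional step factor
    x_next = x + c * r * x * (1.0 - x - mu * y)
    y_next = y + c * r * y * (1.0 - y - mu * x)
    return x_next, y_next
```

Iterating `step` from an initial condition while sweeping r produces the bifurcation diagrams and Lyapunov-exponent estimates the abstract refers to.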
Hybrid feature selection algorithm fusing Shapley value and particle swarm optimization
DENG Xiuqin, LI Wenzhou, WU Jigang, LIU Taiheng
Journal of Computer Applications    2018, 38 (5): 1245-1249.   DOI: 10.11772/j.issn.1001-9081.2017112730
Concerning the problem that data in pattern classification often contain irrelevant or redundant features that reduce classification accuracy, a hybrid feature selection method based on the Shapley value and Particle Swarm Optimization (PSO) was proposed to obtain the best classification results with the fewest features. Firstly, the Shapley value from game theory was introduced into the local search of the PSO algorithm. Then, by calculating the Shapley value of each feature in a particle (feature subset), the feature with the lowest Shapley value was gradually deleted to optimize the feature subset and update the particle, while also enhancing the global search ability of the algorithm. Finally, the improved PSO algorithm was applied to feature selection, with the classification performance of a support vector machine classifier and the number of selected features as the feature subset evaluation criteria. Classification experiments were performed on 17 medical datasets of different dimensionalities from the UCI machine learning repository and on gene expression datasets. The experimental results show that the proposed algorithm effectively removes more than 55% of the irrelevant or redundant features in the datasets (more than 80% in the medium and large datasets), that the selected feature subsets have better classification ability, and that classification accuracy increases by 2 to 23 percentage points.
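The local-search step described above can be sketched as follows: approximate each selected feature's Shapley-style contribution by the score drop when it is removed, and delete the weakest feature. The marginal-contribution approximation and the function name are illustrative assumptions; `score` stands in for any subset evaluator, such as cross-validated SVM accuracy.

```python
def prune_lowest_contribution(selected, score):
    """One local-search step on a particle (feature subset).

    For each feature f, approximate its contribution as
    score(S) - score(S without f), then drop the feature with the
    lowest contribution. Returns the reduced subset and the
    dropped feature.
    """
    base = score(selected)
    contrib = {f: base - score([g for g in selected if g != f])
               for f in selected}
    worst = min(contrib, key=contrib.get)
    return [f for f in selected if f != worst], worst
```

Repeating this step until the score stops improving yields the pruned particle that PSO then carries forward.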
Semi-supervised K-means clustering algorithm based on active learning priors
CHAI Bianfang, LYU Feng, LI Wenbin, WANG Yao
Journal of Computer Applications    2018, 38 (11): 3139-3143.   DOI: 10.11772/j.issn.1001-9081.2018041251
The Iteration-based Active Semi-Supervised Clustering Framework (IASSCF) is a popular semi-supervised clustering framework with two problems: the initial prior information is too sparse, which leads to poor clustering in the initial iteration and harms subsequent clustering; and in each iteration only the single most informative sample is selected for labeling, which results in slow speed and limited performance improvement. Aiming at these problems, a semi-supervised K-means clustering algorithm based on active learning priors was designed, consisting of an initialization phase and an iteration phase. In the initialization phase, representative samples were actively selected to build an initial neighborhood set and constraint set. Each iteration then included three steps: 1) Pairwise Constrained K-means (PCK-means) was used to cluster the data under the current constraints; 2) the unlabeled samples with the most information in each cluster were selected based on the clustering results; 3) the selected samples were added to the neighborhood set and the constraint set. The iteration phase ends when the convergence thresholds are reached. The experimental results show that the proposed algorithm runs faster and performs better than algorithms based on the original IASSCF framework.
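Step 2 of each iteration needs an "informativeness" criterion for unlabeled samples. A common heuristic, sketched below as an assumption rather than the paper's exact criterion, is to pick the point whose cluster assignment is most ambiguous: the one with the smallest margin between its nearest and second-nearest cluster centers.

```python
import numpy as np

def most_informative(points, centroids):
    """Return the index of the unlabeled point with the smallest
    margin between its nearest and second-nearest cluster centers,
    i.e. the most uncertain assignment."""
    # pairwise distances: (n_points, n_centroids)
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :],
                           axis=2)
    dists.sort(axis=1)                    # ascending per row
    margins = dists[:, 1] - dists[:, 0]   # 2nd-nearest minus nearest
    return int(np.argmin(margins))
```

The selected point would then be labeled and converted into must-link / cannot-link constraints for the next PCK-means round.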
Network architecture design of smart substation based on software defined network
HUANG Xin, LI Qin, YANG Gui, ZHU Zhihan, LI Wenmeng, SHI Yuxiang
Journal of Computer Applications    2017, 37 (9): 2512-2517.   DOI: 10.11772/j.issn.1001-9081.2017.09.2512
With the improving standardization and intelligence of secondary equipment, a more efficient and smarter communication network is needed in smart substations to meet operation and maintenance requirements: equipment plug and play, intelligent monitoring, secure subnet isolation and element interchangeability. To address the application needs of unified substation network management, security isolation between subnets, and equipment compatibility and interchangeability, a Software Defined Network (SDN)-based substation network architecture was proposed. The IEC 61850 and OpenFlow protocols were used in the architecture design, and an OpenFlow controller was used to control and isolate the individual subnets, implementing network device management and secure subnet isolation. The experimental results show that precise traffic control based on service types and secure data isolation can be implemented with the proposed SDN-based substation network architecture, which has important application value for improving the operation and maintenance of smart substations.
Highly efficient construction of location fingerprint database based on matrix completion improved by backtracking search optimization
LI Lina, LI Wenhao, YOU Hongxiang, WANG Yue
Journal of Computer Applications    2017, 37 (7): 1893-1899.   DOI: 10.11772/j.issn.1001-9081.2017.07.1893
To address the problems of the off-line construction of the location fingerprint database for Received Signal Strength Indication (RSSI)-based fingerprint positioning, namely the large workload of collecting all fingerprint information in the localization area, the low construction efficiency of the database, and the limited precision of interpolation, an efficient off-line construction method was proposed based on the Singular Value Thresholding (SVT) Matrix Completion (MC) algorithm improved by the Backtracking Search optimization Algorithm (BSA). Firstly, using the location fingerprint data collected at some reference nodes, a low-rank matrix completion model was established. Then the model was solved by the low-rank MC algorithm based on SVT. Finally, the complete location fingerprint database was reconstructed for the localization area. The BSA was introduced into the optimization process of the MC algorithm, with the minimum nuclear norm as the fitness function, to address the fuzzy optimal solution and poor smoothness of traditional MC theory and further improve the accuracy of the solution. The experimental results show that the average error between the fingerprint database constructed by the proposed method and the actually collected one is only 2.7054 dB, and the average positioning error is only 0.0863 m, while nearly 50% of the off-line collection workload is saved. The proposed method therefore effectively reduces the workload of the off-line collection stage while preserving accuracy, significantly improves the construction efficiency of the location fingerprint database, and improves the practicability of fingerprint positioning.
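The core of SVT-based matrix completion is the singular value shrinkage operator: singular values below the threshold tau are zeroed and the rest are reduced by tau, yielding a low-rank estimate of the partially observed fingerprint matrix. A minimal sketch of that operator (the full SVT iteration with observation constraints and the BSA improvement are omitted):

```python
import numpy as np

def shrink(Y, tau):
    """Singular value shrinkage operator D_tau(Y).

    Computes the SVD of Y, subtracts tau from every singular value
    (clamping at zero), and reassembles the matrix; this is the
    proximal step that drives the nuclear norm down in SVT.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return U @ np.diag(s) @ Vt
```

In the full algorithm this operator is applied repeatedly while the observed RSSI entries are re-imposed, until the completed matrix agrees with the measurements.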
Meta-learning based optimization algorithm selection framework and its empirical study
CUI Jianshuang, LIU Xiaochan, YANG Meihua, LI Wenyan
Journal of Computer Applications    2017, 37 (4): 1105-1110.   DOI: 10.11772/j.issn.1001-9081.2017.04.1105
The goal of algorithm selection is to automatically choose the most suitable algorithm for the current problem from a set of available algorithms. To this end, an intelligent recommendation framework based on meta-learning was presented. Following this framework, an automatic selection procedure for Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Simulated Annealing (SA) was designed, using the Multi-mode Resource-Constrained Project Scheduling Problem (MRCPSP) as the validation dataset. 378 MRCPSP instances were randomly picked from the Project Scheduling Problem Library (PSPLib); the inherent and statistical features of each instance were extracted as metadata, and a prediction meta-model for new instances was obtained with a Feed-forward Neural Network (FNN). The empirical results show that when choosing one algorithm out of two, the hit rate reaches up to 95%, with an average of 85%; when choosing one algorithm out of three, the best and average hit rates reach 92% and 80% respectively. The proposed intelligent recommendation framework is thus effective, and automatic selection of optimization algorithms is feasible.
Probabilistic bichromatic reverse-kNN query on road network
XU Wei, LI Wengen, ZHANG Yichao, GUAN Jihong
Journal of Computer Applications    2017, 37 (2): 341-346.   DOI: 10.11772/j.issn.1001-9081.2017.02.0341
Considering road network constraints and the uncertainty of moving object locations, a new reverse-kNN query on road networks, termed the Probabilistic Bichromatic Reverse-kNN (PBRkNN) query, was proposed: it finds the set of uncertain points whose kNN contain the given query point with probability greater than a specified threshold. Firstly, a basic algorithm called Probabilistic Eager (PE) was proposed, which uses Dijkstra's algorithm for pruning. Then, the Pre-compute Probabilistic Eager (PPE) algorithm, which pre-computes the kNN of each point, was proposed to improve query efficiency. To improve efficiency further, the Pre-compute Probabilistic Eager External (PPEE) algorithm, which uses a grid index to accelerate range queries, was also proposed. Experimental results on the road networks of Beijing and California show that the proposed pre-computation strategies enable efficient processing of probabilistic bichromatic reverse-kNN queries on road networks.
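The Dijkstra-based primitive behind PE and the pre-computation in PPE can be sketched as follows: expand from a vertex and settle vertices in increasing network distance, stopping after k of them; those k are the vertex's kNN by road distance. The adjacency-list layout and function name are illustrative assumptions.

```python
import heapq

def knn_on_road_network(graph, source, k):
    """Return the first k vertices settled by Dijkstra expansion from
    `source`, as (vertex, network_distance) pairs in increasing distance.

    graph: dict mapping vertex -> list of (neighbor, edge_weight).
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    settled = []
    while heap and len(settled) < k:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        settled.append((u, d))
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return settled
```

Early termination after k settlements is what makes the expansion usable both online (PE's pruning) and offline (PPE's per-vertex pre-computation).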
Active semi-supervised community detection method based on link model
CHAI Bianfang, WANG Jianling, XU Jiwei, LI Wenbin
Journal of Computer Applications    2017, 37 (11): 3090-3094.   DOI: 10.11772/j.issn.1001-9081.2017.11.3090
Link models can model the community detection problem on networks. Compared with similar models, including symmetric and conditional models, the PPL (Popularity and Productivity Link) model handles more types of networks and detects communities more accurately. However, PPL is an unsupervised model: it works poorly when the network structure is unclear, and it cannot make use of priors that are easily obtained. To improve its performance using as few priors as possible, an Active Node Prior Learning (ANPL) algorithm was provided. ANPL selects the pairwise constraints with the highest utility that are easiest to label, and automatically generates more informative labeled nodes from the labeled pairwise constraints. Based on the PPL model, a Semi-supervised PPL (SPPL) model for community detection was proposed, which combines the network topology with the node labels learned by the ANPL algorithm. Experiments on synthetic and real networks demonstrate that, using node priors from the ANPL algorithm and the topology of the network, the SPPL model outperforms the unsupervised PPL model and popular semi-supervised community detection models based on Non-negative Matrix Factorization (NMF).
Semi-supervised community detection algorithm using active link selection based on iterative framework
CHEN Yiying, CHAI Bianfang, LI Wenbin, HE Yichao, WU Congcong
Journal of Computer Applications    2017, 37 (11): 3085-3089.   DOI: 10.11772/j.issn.1001-9081.2017.11.3085
Semi-supervised community detection methods based on Non-negative Matrix Factorization (NMF) that select prior information randomly need large amounts of supervised information to achieve satisfactory performance. To solve this problem, an Active Link Selection algorithm for semi-supervised community detection based on Graph-regularized NMF (ALS_GNMF) was proposed. Firstly, within the iteration framework, the most uncertain and informative links were actively selected as prior links. Secondly, must-link constraints on these links, which generate the prior matrix, were added to strengthen connections within a community, while cannot-link constraints, which modify the adjacency matrix, were added to weaken connections between communities. Finally, the prior matrix was incorporated as a graph regularization term into the optimization objective function of NMF. Combined with network topology information, higher community detection accuracy and robustness were achieved with less prior information. At the same prior ratio, experimental results on both synthetic and real networks demonstrate that ALS_GNMF significantly outperforms existing semi-supervised NMF algorithms in terms of efficiency, and it is especially stable on networks with unclear structure.
Time-based strategy audit scheme of access control list in multi-layer firewall
WANG Xudong, CHEN Qingping, LI Wen, ZHANG Xinming
Journal of Computer Applications    2017, 37 (1): 212-216.   DOI: 10.11772/j.issn.1001-9081.2017.01.0212
To solve the Access Control List (ACL) policy audit problem in multi-layer firewalls, policy anomalies within a single firewall and between multi-layer firewalls were analyzed with time taken into account, and an Anomaly Detection based on Backtracking Algorithm (ADBA) was proposed, which constructs a tree structure according to the firewall topology. First, the ACL policy of each firewall was parsed and unified into a common database format. Second, the firewall tree structure was built from the firewall topology and anomalies were detected within each single firewall. Finally, the data in the database and the tree structure were used by ADBA to detect and record anomalous policies. The experimental results show that, compared with the Semi-isomorphic Marked Firewall Decision Diagram (SMFDD) algorithm, the proposed ADBA reduces the execution time of anomaly detection by 28.01% and reduces misjudgments in anomaly detection by considering the time factor. ADBA can be applied effectively to multi-layer firewall ACL audits, improving detection accuracy and reducing detection time.
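One representative intra-firewall anomaly that a time-aware audit must flag is shadowing: an earlier rule matches a superset of a later rule's packets during an overlapping active period but applies a different action. A minimal sketch of that check (the rule layout, with inclusive `src`/`dst`/`time` ranges and an `action` string, is a hypothetical simplification of real ACL entries):

```python
def overlaps(a, b):
    """Closed-interval overlap test for (start, end) pairs."""
    return a[0] <= b[1] and b[0] <= a[1]

def is_shadowed(earlier, later):
    """True if `later` is shadowed by `earlier`: the earlier rule
    covers every packet the later rule matches, their active time
    windows overlap, and their actions differ."""
    covers = all(earlier[f][0] <= later[f][0] and later[f][1] <= earlier[f][1]
                 for f in ('src', 'dst'))
    return (covers and overlaps(earlier['time'], later['time'])
            and earlier['action'] != later['action'])
```

Running such pairwise checks along each root-to-leaf path of the firewall tree is the kind of work the backtracking in ADBA organizes.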
Design and implementation of QoE measuring tool for personal cloud storage services
YUAN Bin, LI Wenwei
Journal of Computer Applications    2016, 36 (9): 2409-2415.   DOI: 10.11772/j.issn.1001-9081.2016.09.2409
With users' growing demand for network storage, a large number of Personal Cloud Storage (PCS) service platforms are emerging, and the Quality of Experience (QoE) perceived by end users has become a common concern of both users and service providers. The factors that affect QoE in personal cloud storage were analyzed in terms of the different characteristics of control flows and data flows. Several indicators that can reasonably evaluate the QoE of personal cloud storage were proposed from the end user's perspective, and an accurate measurement method for these indicators was designed. A QoE measuring tool for personal cloud services was implemented based on passive measurement, and solutions were given for implementation issues such as capturing the packets of a specific process and classifying network flows. The experimental results show that the measuring tool runs robustly and obtains accurate results, and can be used to measure the QoE of personal cloud services on end hosts.
Multidimensional topic model for oriented sentiment analysis based on long short-term memory
TENG Fei, ZHENG Chaomei, LI Wen
Journal of Computer Applications    2016, 36 (8): 2252-2256.   DOI: 10.11772/j.issn.1001-9081.2016.08.2252
Concerning the low accuracy of global Chinese microblog sentiment classification, a model based on Multi-dimensional Topics and Long Short-Term Memory (MT-LSTM) was introduced. The model performs hierarchical multidimensional sequence computation: it is composed of Long Short-Term Memory (LSTM) cell networks and is suitable for processing vectors, arrays and higher-dimensional data. Firstly, a microblog was divided into multiple levels for analysis: propagating upward, the sentiment tendencies of words and phrases were analyzed by a three-Dimensional LSTM (3D-LSTM); propagating rightward, the sentiment tendency of the whole microblog was analyzed by a Multi-Dimensional LSTM (MD-LSTM). Secondly, sentiment tendencies were analyzed by Gaussian distribution over topic signs. Finally, the classification result was obtained by weighting the above analyses. The experimental results show that the average precision of the proposed model reached 91% (up to 96.5%), and the recall on neutral microblogs reached 50%. In comparison experiments with a Recursive Neural Network (RNN) model, the F-measure of MT-LSTM was more than 40% higher; compared with the same model without topic division, the F-measure was 11.9% higher thanks to the fine-grained topic division. The proposed model has good overall performance: it effectively improves the accuracy of Chinese microblog sentiment analysis while reducing the amount of training data and the complexity of matching computation.
Two-dimensional direction-of-arrival estimation based on sparse representation of reduced covariance matrix
LI Wenjie, YANG Tao, MEI Yanying
Journal of Computer Applications    2016, 36 (8): 2197-2201.   DOI: 10.11772/j.issn.1001-9081.2016.08.2197
Since the computational load of Two-Dimensional Direction-Of-Arrival (2D-DOA) estimation via sparse reconstruction is high, a 2D-DOA estimation algorithm based on sparse representation of the reduced covariance matrix was proposed. Firstly, the redundant dictionary of manifold vectors was constructed using the space angle, which maps the azimuth and pitch angles from a two-dimensional space to a one-dimensional one; this greatly reduces the dictionary length and the computational complexity, and lets the pitch and azimuth angles be matched automatically. Secondly, the sparse representation model of the sampled covariance matrix was improved to reduce its dimension. Then, constrained residual confidence intervals were obtained from the residual constraint characteristics of the sparse reconstruction of the covariance matrix, avoiding the choice of regularization parameters. Finally, 2D-DOA estimation was realized with a convex optimization package. Simulation results show that the incident angles can be accurately estimated once the number of selected covariance matrix columns reaches a threshold (3 in the presence of 2 incident signals). Compared with the space-angle-based eigenvector method, the estimation accuracy of the proposed method is higher at relatively low Signal-to-Noise Ratio (SNR < 5 dB), slightly lower with a small number of snapshots (< 100), and similar at small angle intervals.
Hadoop adaptive task scheduling algorithm based on computation capacity difference between node sets
ZHU Jie, LI Wenrui, WANG Jiangping, ZHAO Hong
Journal of Computer Applications    2016, 36 (4): 918-922.   DOI: 10.11772/j.issn.1001-9081.2016.04.0918
Aiming at the fixed task progress proportions and passive selection of slow tasks in speculative execution algorithms for heterogeneous clusters, an adaptive task scheduling algorithm based on the computation capacity difference between node sets was proposed. The computation capacity difference between node sets was quantified so that tasks could be scheduled over fast and slow node sets, and dynamic feedback on node and task speeds was used to update the slow node set in time, improving resource utilization and task parallelism. Within the two node sets, task progress proportions were adjusted dynamically to improve the accuracy of slow task identification, and fast nodes were selected to execute backup tasks for slow tasks via substitute execution, improving task execution efficiency. The experimental results show that, compared with the Longest Approximate Time to End (LATE) algorithm, the proposed algorithm reduces running time by 5.21%, 20.51% and 23.86% on a short job set, a mixed-type job set, and a mixed-type job set with node performance degradation respectively, and launches significantly fewer backup tasks. The proposed algorithm adapts tasks to node differences and effectively improves overall job execution efficiency while reducing slow backup tasks.
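The LATE baseline mentioned above identifies slow tasks by estimated time to end: progress rate is progress divided by elapsed time, and remaining time is the unfinished fraction divided by that rate. A minimal sketch of that estimate and the backup-candidate choice (function names are illustrative; the proposed algorithm refines this with per-node-set dynamic proportions):

```python
def estimated_time_to_end(progress, elapsed):
    """LATE-style remaining-time estimate for a running task.

    progress: completed fraction in (0, 1], elapsed: seconds run so far.
    rate = progress / elapsed; remaining = (1 - progress) / rate.
    """
    rate = progress / elapsed
    return (1.0 - progress) / rate

def pick_backup_candidate(tasks):
    """Among running tasks given as (progress, elapsed) pairs, return
    the index of the one expected to finish last: the slow task a
    fast node would speculatively back up."""
    return max(range(len(tasks)),
               key=lambda i: estimated_time_to_end(*tasks[i]))
```

Adjusting the threshold on this estimate per node set, instead of using one fixed proportion, is what lets the proposed algorithm identify slow tasks more accurately on heterogeneous hardware.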
Reference | Related Articles | Metrics
Adaptive slicing algorithm to retain model characteristics
LI Wenkang, CHEN Changbo, WU Wenyuan
Journal of Computer Applications    2015, 35 (8): 2295-2300.   DOI: 10.11772/j.issn.1001-9081.2015.08.2295
Abstract470)      PDF (941KB)(391)       Save

To resolve the problem that existing adaptive slicing algorithms in 3D printing cannot effectively retain model characteristics, a new adaptive slicing method that recognizes and retains model characteristics was proposed. Firstly, the definition of a model characteristic was extended, and the concepts of loss and offset of model characteristics were introduced. Secondly, a characteristic recognition method was proposed, whose key point is to exploit the fact that the surface complexity and the number of contours must change around model characteristics. Finally, based on existing adaptive slicing algorithms, the proposed algorithm retained model characteristics by slicing the model with the minimum layer thickness near those characteristics. The uniform slicing algorithm, an existing adaptive slicing algorithm and the proposed slicing algorithm were implemented on the self-developed software Slicer3DP. A comparison of these algorithms shows that the proposed algorithm effectively resolves the loss and offset of model characteristics while maintaining both slicing precision and efficiency, indicating that it can be used for 3D printing with high precision requirements.
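The idea of slicing with the minimum layer thickness near model characteristics can be sketched as below; the thickness bounds, the guard distance and the binary thin/thick rule are illustrative assumptions (the paper's algorithm adapts thickness by surface complexity as well):

```python
def layer_thickness(z, features, t_min=0.05, t_max=0.3, guard=0.5):
    """Illustrative thickness rule (mm): use the minimum layer thickness
    within `guard` of a detected model characteristic height, and the
    maximum elsewhere."""
    near = any(abs(z - f) < guard for f in features)
    return t_min if near else t_max

def slice_heights(height, features):
    """Generate slice heights for a model of the given height."""
    zs, z = [], 0.0
    while z < height:
        zs.append(round(z, 4))
        z += layer_thickness(z, features)
    return zs

# a 2 mm model with one recognized characteristic at z = 1.0 mm
zs = slice_heights(2.0, features=[1.0])
```

The resulting layer stack is coarse away from the characteristic and fine in the guard band around it.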

Reference | Related Articles | Metrics
Path optimization algorithm for team navigation based on multiple point collaboration
QIU Jigang, LI Wenlong, YANG Jia
Journal of Computer Applications    2015, 35 (7): 2093-2095.   DOI: 10.11772/j.issn.1001-9081.2015.07.2093
Abstract506)      PDF (608KB)(465)       Save

Concerning the suboptimal paths and the mutual-waiting delays caused by information islands in team travel, a collaborative path optimization algorithm was proposed, which employed centralized computing based on information sharing among team members. The algorithm calculated the optimal navigation path weighted by a meeting priority factor, taking both meeting convenience and path/time shortening into overall consideration. Theoretical analysis shows that the computational complexity increases linearly with the number of team members and is approximately equal to that of the traditional path optimization algorithm. The simulation results show that the meeting priority factor has a great influence on the optimized path and the meeting place, so it needs to be set according to the actual requirements to keep a dynamic balance between team cooperation and path shortening. A typical application of the collaborative path optimization algorithm was given to illustrate how team members can support each other and travel together to the destination in an orderly, safe and quick manner.
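One way to read the meeting-priority weighting is as a trade-off between total travel distance and a mutual-waiting term. The objective below, including the waiting proxy, the candidate points and the value of `alpha`, is an illustrative assumption rather than the paper's formula:

```python
import math

def collaborative_cost(candidate, members, dest, alpha=0.6):
    """Hypothetical team objective: weight the spread of members' travel
    distances to the meeting point (a mutual-waiting proxy) against the
    total distance via the meeting point to the destination; `alpha` plays
    the role of the meeting priority factor."""
    legs = [math.dist(m, candidate) for m in members]
    travel = sum(legs) + len(members) * math.dist(candidate, dest)
    waiting = max(legs) - min(legs)
    return alpha * waiting + (1 - alpha) * travel

members = [(0, 0), (4, 0)]                  # two team members' positions
dest = (2, 5)                               # common destination
best = min([(2, 0), (3, 0)],                # candidate meeting points
           key=lambda c: collaborative_cost(c, members, dest))
```

A larger `alpha` pushes the chosen meeting point toward equalizing members' travel, at the cost of a longer total path.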

Reference | Related Articles | Metrics
Minimum MPR set selection algorithm based on OLSR protocol
LIU Jie, WANG Ling, WANG Shan, FENG Wei, LI Wen
Journal of Computer Applications    2015, 35 (2): 305-308.   DOI: 10.11772/j.issn.1001-9081.2015.02.0305
Abstract1054)      PDF (798KB)(544)       Save

Aiming at the redundancy that arises when the greedy algorithm is used to solve the minimum MultiPoint Relay (MPR) set in the traditional Optimized Link State Routing (OLSR) protocol, a globally improved Global_OP_MPR algorithm was proposed. First, an improved OP_MPR algorithm based on the greedy algorithm was introduced; this algorithm removed the redundancy by gradually optimizing the MPR set, and could obtain the minimum MPR set simply and efficiently. Then, on the basis of the OP_MPR algorithm, the Global_OP_MPR algorithm added global factors to the MPR selection criteria, replacing "local optimization" with "overall optimization", and could eventually obtain the minimum MPR set over the entire network. Simulations were conducted in OPNET using the Random Waypoint motion model. Compared with the traditional OLSR protocol, the OLSR protocol combined with the OP_MPR or Global_OP_MPR algorithm effectively reduced the number of MPR nodes in the entire network, carried a lower network load in terms of the number of Topology Control (TC) packets, and had a lower network delay. The simulation results show that both OP_MPR and Global_OP_MPR can optimize the size of the MPR set and improve the network performance of the protocol. Moreover, by taking global factors into consideration, the Global_OP_MPR algorithm achieves better network performance.
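The greedy baseline that OP_MPR and Global_OP_MPR refine can be sketched on a toy topology (the one-hop neighbour list `n1` and the two-hop coverage map `n2_cover` are hypothetical):

```python
def greedy_mpr(n1, n2_cover):
    """Classic greedy MPR selection: repeatedly pick the one-hop neighbour
    that covers the most still-uncovered two-hop nodes. This is the
    baseline whose redundant picks the OP_MPR/Global_OP_MPR refinements
    prune away.

    n1:       ordered list of one-hop neighbours
    n2_cover: maps each one-hop neighbour to the two-hop nodes it reaches
    """
    uncovered = set().union(*n2_cover.values())
    mpr = set()
    while uncovered:
        best = max(n1, key=lambda n: len(n2_cover[n] & uncovered))
        mpr.add(best)
        uncovered -= n2_cover[best]
    return mpr

mprs = greedy_mpr(
    n1=["a", "b", "c"],
    n2_cover={"a": {"x", "y"}, "b": {"y", "z"}, "c": {"z"}},
)
```

Here the greedy pass selects {a, b}, which covers all two-hop nodes {x, y, z}.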

Reference | Related Articles | Metrics
Resource matching maximum set job scheduling algorithm under Hadoop
ZHU Jie, LI Wenrui, ZHAO Hong, LI Ying
Journal of Computer Applications    2015, 35 (12): 3383-3386.   DOI: 10.11772/j.issn.1001-9081.2015.12.3383
Abstract613)      PDF (725KB)(332)       Save
Concerning the inefficient execution of jobs with a high proportion of resources in the job scheduling algorithms of the present hierarchical queue structure, a resource matching maximum set algorithm was proposed. The proposed algorithm analyzed job characteristics and introduced the percentage of completion, waiting time, priority and rescheduling times as urgency factors. Jobs with a high proportion of resources or a long waiting time were considered preferentially to improve job fairness. Under the condition of a limited amount of available resources, double queues were applied to preferentially select jobs with high urgency values, and the maximum job set was selected from job sets with different proportions of resources in order to achieve scheduling balance. Compared with the Max-min fairness algorithm, the proposed algorithm decreases the average waiting time and improves resource utilization. The experimental results show that, by using the proposed algorithm, the running time of a same-type job set consisting of jobs with different proportions of resources is reduced by 18.73%, and the running time of jobs with a high proportion of resources is reduced by 27.26%; the corresponding reductions for the mixed-type job set are 22.36% and 30.28%. The results indicate that the proposed algorithm can effectively reduce the waiting time of jobs with a high proportion of resources and improve the overall job execution efficiency.
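A possible shape for the urgency value is a weighted sum of the four factors named above; the weights and the example jobs below are purely illustrative assumptions, not the paper's parameters:

```python
def urgency(completion, wait_time, priority, reschedules,
            weights=(0.3, 0.3, 0.3, 0.1)):
    """Hypothetical urgency score combining the four factors from the
    abstract: percentage of completion, (normalized) waiting time,
    priority and rescheduling count."""
    w1, w2, w3, w4 = weights
    return w1 * completion + w2 * wait_time + w3 * priority + w4 * reschedules

# a long-waiting, once-rescheduled job vs. a nearly finished fresh job
jobs = {"j1": urgency(0.2, 0.9, 0.5, 1), "j2": urgency(0.8, 0.1, 0.5, 0)}
first = max(jobs, key=jobs.get)             # job scheduled first
```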
Reference | Related Articles | Metrics
Group trip planning queries on road networks
ZHU Haiquan, LI Wengen, ZHANG Yichao, GUAN Jihong
Journal of Computer Applications    2015, 35 (11): 3146-3150.   DOI: 10.11772/j.issn.1001-9081.2015.11.3146
Abstract429)      PDF (908KB)(514)       Save
Group Trip Planning (GTP) queries aim to find common activity sites (usually expressed as Points of Interest (PoI)) for a group of users, so as to minimize their total travel cost. Existing research on GTP queries is limited to Euclidean space; however, real travel is restricted by road networks. Motivated by this observation, two algorithms (NE-GTP and ER-GTP) were designed to solve GTP queries. NE-GTP expanded the network around every user's location to find the PoIs iteratively, while ER-GTP used an R-tree index and Euclidean distance to obtain the results quickly. The experimental results show that ER-GTP is on average an order of magnitude faster than NE-GTP in processing time, and that ER-GTP also scales well as the dataset grows large.
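The Euclidean objective that ER-GTP exploits can be sketched for a single activity site; the user locations and PoIs are hypothetical, and the R-tree index and road-network refinement are omitted:

```python
import math

def best_meeting_poi(users, pois):
    """Euclidean sketch of the GTP objective: pick the PoI that minimizes
    the group's total travel distance. ER-GTP uses Euclidean distances
    like these (via an R-tree) to prune candidates before refining costs
    on the road network."""
    def total_cost(poi):
        return sum(math.dist(u, poi) for u in users)
    return min(pois, key=total_cost)

poi = best_meeting_poi(
    users=[(0, 0), (4, 0), (2, 3)],         # three group members
    pois=[(2, 1), (10, 10)],                # two candidate activity sites
)
```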
Reference | Related Articles | Metrics
Adaptive range particle swarm optimization with the Cauchy distributed population
LU Shaohua ZHANG Xiaowei BAO Chengqiang LI Wenbao
Journal of Computer Applications    2014, 34 (4): 1070-1073.   DOI: 10.11772/j.issn.1001-9081.2014.04.1070
Abstract462)      PDF (644KB)(398)       Save

In order to improve the performance of Particle Swarm Optimization (PSO), an adaptive range PSO with a Cauchy distributed population, named ARPSO/C, was proposed. Under the assumption that the individuals obey the Cauchy distribution, the algorithm used the median and scale parameters to adjust the search range of the population self-adaptively, thus balancing local search and global search. The numerical comparison among the proposed algorithm, ARPSO and PSO shows that the proposed algorithm has a higher convergence speed and can overcome premature convergence.
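Drawing a population from a Cauchy distribution parameterized by a median and a scale can be sketched as below; the bounds and parameter values are illustrative, and the self-adaptive update of the median/scale across iterations is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def cauchy_positions(median, scale, n_particles, dim, bounds):
    """Sample particle positions from a Cauchy distribution with the given
    location (median) and scale, clipped to the search range. The heavy
    Cauchy tails occasionally place particles far from the median, which
    is what supports global exploration."""
    samples = median + scale * rng.standard_cauchy((n_particles, dim))
    return np.clip(samples, *bounds)

pop = cauchy_positions(median=0.0, scale=1.0, n_particles=50, dim=2,
                       bounds=(-10.0, 10.0))
```

In the full algorithm, the median would track the current best region and the scale would shrink or grow to re-balance local and global search.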

Reference | Related Articles | Metrics
Face recognition based on symmetric Gabor features and sparse representation
HE Lingli LI Wenbo
Journal of Computer Applications    2014, 34 (2): 550-552.  
Abstract528)      PDF (442KB)(563)       Save
Inspired by the prior knowledge that face images are approximately symmetric, an algorithm based on symmetric Gabor features and sparse representation was proposed and applied to face recognition. At first, a mirror transform was performed on face images to obtain their mirror images, with which the face images could be decomposed into odd and even symmetric faces. Then, Gabor features were extracted from both the odd faces and the even faces to obtain odd-even symmetric Gabor features, which were fused via a weighting factor to generate new features. At last, the newly obtained features were combined to form an over-complete dictionary, which was used by sparse representation to classify the faces. The experimental results on the AR and FERET face databases show that the new method can achieve high accuracy even when the face images are under expression, pose and illumination variations.
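The odd-even decomposition via the mirror transform is exact and easy to sketch; a tiny array stands in for a grayscale face image, and the weighting factor below is an illustrative value:

```python
import numpy as np

face = np.arange(12.0).reshape(3, 4)   # stand-in for a grayscale face image
mirror = face[:, ::-1]                 # horizontal mirror transform

even = (face + mirror) / 2             # even (symmetric) face
odd = (face - mirror) / 2              # odd (antisymmetric) face
# the decomposition is exact: face == even + odd

# in the full method, Gabor features of `even` and `odd` are fused;
# the 0.7/0.3 weighting here is illustrative
fused = 0.7 * even + 0.3 * odd
```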
Related Articles | Metrics
Classification method for interval uncertain data based on improved naive Bayes
LI Wenjin XIONG Xiaofeng MAO Yimin
Journal of Computer Applications    2014, 34 (11): 3268-3272.   DOI: 10.11772/j.issn.1001-9081.2014.11.3268
Abstract155)      PDF (711KB)(548)       Save

Considering the high computational complexity and storage requirement of Naive Bayes (NB) based on Parzen Window Estimation (PWE), especially for classification on interval uncertain data, an improved method named IU-PNBC was proposed for classifying interval uncertain data. Firstly, the Class-Conditional Probability Density Function (CCPDF) was estimated by using PWE. Secondly, an approximate function for the CCPDF was obtained by using algebraic interpolation. Finally, the posterior probability was computed with the approximate interpolation function and used for classification. Artificial simulation data and UCI standard datasets were used to verify the rationality of the proposed algorithm and the effect of the number of interpolation points on the classification accuracy of IU-PNBC. The experimental results show that the accuracy of IU-PNBC increases with the number of interpolation points and tends to be stable when there are more than 15 interpolation points, and that IU-PNBC can avoid the dependence on the training samples and improve the computational efficiency effectively. Thus, IU-PNBC is suitable for classification on large interval uncertain data, with lower computational complexity and storage requirement than NB based on Parzen window estimation.
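The Parzen-window estimate of a class-conditional density, whose repeated evaluation the interpolation step is designed to avoid, can be sketched as follows (Gaussian window; the bandwidth and sample values are illustrative):

```python
import numpy as np

def parzen_pdf(x, samples, h=0.5):
    """Gaussian Parzen-window density estimate at point x from the class's
    training samples. Evaluating this for every test point requires all
    training samples; IU-PNBC instead interpolates an approximation of
    this function at a small number of points."""
    samples = np.asarray(samples, dtype=float)
    kernel = np.exp(-0.5 * ((x - samples) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return kernel.mean()

p = parzen_pdf(0.0, [-0.2, 0.1, 0.3])
```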

Reference | Related Articles | Metrics
Three-queue job scheduling algorithm based on Hadoop
ZHU Jie ZHAO Hong LI Wenrui
Journal of Computer Applications    2014, 34 (11): 3227-3230.   DOI: 10.11772/j.issn.1001-9081.2014.11.3227
Abstract184)      PDF (756KB)(524)       Save

In a homogeneous Hadoop cluster, single-queue job scheduling algorithms cause short jobs to wait and lead to a low utilization rate of resources; multi-queue scheduling algorithms solve the problems of unfairness and low execution efficiency, but most of them need parameters to be set manually, let queues compete with each other for resources, and are more complex. In order to resolve these problems, a three-queue scheduling algorithm was proposed. The algorithm used job classification, dynamic priority adjustment, a shared resource pool and job preemption to realize fairness, simplify the scheduling flow of normal jobs and improve concurrency. Comparison experiments with the First In First Out (FIFO) algorithm were conducted under three situations: the percentage of short jobs is high, the percentages of all types of jobs are similar, and normal jobs are in the majority with occasional long and short jobs. The proposed algorithm reduced the running time of jobs. The experimental results show that the gain in execution efficiency of the proposed algorithm is not obvious when most jobs are short ones; however, when the mix of job types is balanced, the improvement is remarkable. This is consistent with the algorithm design rules: prioritizing short jobs, simplifying the scheduling flow of normal jobs and taking long jobs into account, which improves the scheduling performance.
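The job classification step can be sketched with a simple runtime rule; the thresholds below are hypothetical, and the dynamic priority adjustment, shared resource pool and preemption logic are omitted:

```python
def classify_job(expected_runtime, short_cut=60, long_cut=3600):
    """Hypothetical classification rule: route a job to one of the three
    queues by its expected runtime in seconds (the paper's actual
    classification criteria and thresholds may differ)."""
    if expected_runtime < short_cut:
        return "short"
    if expected_runtime < long_cut:
        return "normal"
    return "long"

queues = {"short": [], "normal": [], "long": []}
for job, runtime in [("j1", 10), ("j2", 600), ("j3", 7200)]:
    queues[classify_job(runtime)].append(job)
```

Keeping short jobs in their own queue is what lets the scheduler prioritize them without starving normal and long jobs.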

Reference | Related Articles | Metrics
Artificial fish swarm parallel algorithm based on multi-core cluster
LI Shuang LI Wenjing SHUN Huanlong LIN Zhongming
Journal of Computer Applications    2013, 33 (12): 3380-3384.  
Abstract685)      PDF (769KB)(358)       Save
Concerning the low accuracy, tendency to stagnate and slow convergence speed of the Artificial Fish Swarm Algorithm (AFSA) in the later stage of its evolution, a Parallel Dynamic weigh Niches Artificial Fish Swarm (PDN-AFS) algorithm based on a multi-core cluster was proposed. Firstly, the advantages and disadvantages of AFSA were analyzed, and a dynamic weighting factor strategy and a niche mechanism were adopted, yielding a new Dynamic weigh Niches Artificial Fish Swarm (DN-AFS) algorithm. Then the parallel design and analysis of the DN-AFS algorithm based on the hybrid parallel programming model (MPI+OpenMP) were introduced. Finally, simulation experiments were carried out in a multi-core cluster environment. The experimental results show that PDN-AFS can effectively improve the convergence speed and optimization performance on complex multimodal function optimization problems, and achieves a high speedup.
Related Articles | Metrics
New self-adaptive method for image denoising based on sparse decomposition and clustering
WEI Yali WEN Xianbin LIAO Yongchun ZHENG Yongchun
Journal of Computer Applications    2013, 33 (02): 476-479.   DOI: 10.3724/SP.J.1087.2013.00476
Abstract928)      PDF (668KB)(382)       Save
The theory of sparse signal representation has been extensively and deeply researched in recent years and widely applied to image processing. To address the huge computational cost of over-complete dictionary construction and sparse decomposition, a new self-adaptive method for image denoising based on sparse decomposition and clustering was proposed. Firstly, an over-complete dictionary was designed by training samples with a modified K-means clustering algorithm; in the training process, the atoms of the dictionary were updated adaptively in every iterative step to better fit the sparse representation of the samples. Secondly, the sparse representation of the test image was obtained by using the dictionary combined with the Orthogonal Matching Pursuit (OMP) algorithm, so as to achieve image denoising. The experimental results show that the proposed method outperforms the traditional dictionary training algorithm in terms of both image denoising quality and computational complexity.
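The K-means-style dictionary training step can be sketched as below, with atoms re-fitted in every iteration. The data sizes are arbitrary, and the assign/re-fit rule is a simplification of the paper's modified K-means update:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_dictionary(samples, n_atoms, iters=10):
    """K-means-style dictionary training sketch: assign each (normalized)
    sample column to its most correlated atom, then re-fit each atom as
    the normalized mean of its cluster. Atoms are updated in every
    iteration, echoing the adaptive update described in the abstract."""
    X = samples / np.linalg.norm(samples, axis=0, keepdims=True)
    D = X[:, rng.choice(X.shape[1], n_atoms, replace=False)]  # seed atoms
    for _ in range(iters):
        labels = np.argmax(np.abs(D.T @ X), axis=0)           # assignment step
        for k in range(n_atoms):
            members = X[:, labels == k]
            if members.size:                                  # re-fit step
                atom = members.mean(axis=1)
                D[:, k] = atom / np.linalg.norm(atom)
    return D

# 200 random 16-dimensional patches stand in for training samples
D = train_dictionary(rng.standard_normal((16, 200)), n_atoms=8)
```

Denoising would then sparse-code each noisy patch over `D` with OMP and reconstruct from the few selected atoms.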
Related Articles | Metrics